UK campaigners raise alarm over report of Meta plan to use automation for risk checks

The Guardian

Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments after a report that Mark Zuckerberg's Meta was planning to automate checks. Ofcom said it was "considering the concerns" raised by the campaigners' letter, after a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI. Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms – with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act. In a letter to Ofcom's chief executive, Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a "retrograde and highly alarming step".


MPO: Boosting LLM Agents with Meta Plan Optimization

Xiong, Weimin, Song, Yifan, Dong, Qingxiu, Zhao, Bingchan, Song, Feifan, Wang, Xun, Li, Sujian

arXiv.org Artificial Intelligence

Recent advancements in large language models (LLMs) have enabled LLM-based agents to successfully tackle interactive planning tasks. However, despite their successes, existing approaches often suffer from planning hallucinations and require retraining for each new agent. To address these challenges, we propose the Meta Plan Optimization (MPO) framework, which enhances agent planning capabilities by directly incorporating explicit guidance. Unlike previous methods that rely on complex knowledge, which either require significant human effort or lack quality assurance, MPO leverages high-level general guidance through meta plans to assist agent planning and enables continuous optimization of the meta plans based on feedback from the agent's task execution. Our experiments conducted on two representative tasks demonstrate that MPO significantly outperforms existing baselines. Moreover, our analysis indicates that MPO provides a plug-and-play solution that enhances both task completion efficiency and generalization capabilities in previously unseen scenarios.
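The feedback loop the abstract describes can be sketched in miniature: candidate meta plans are scored by how well the agent performs when following them, and the best-scoring plan survives. This is a toy stand-in, not the paper's implementation; `MetaPlan`, `execute_with_plan`, and the scoring scheme are all illustrative assumptions.

```python
# Hypothetical sketch of an MPO-style loop: a meta plan provides high-level
# guidance, the agent executes with it, and execution feedback is used to
# prefer better meta plans. All names here are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class MetaPlan:
    steps: list
    score: float = 0.0  # running estimate of task reward when using this plan


def execute_with_plan(plan, task):
    """Stand-in for the LLM agent: reward is the fraction of the task's
    required subgoals that the meta plan's guidance covers."""
    covered = sum(1 for s in plan.steps if s in task["subgoals"])
    return covered / len(task["subgoals"])


def optimize_meta_plans(candidates, task, rounds=3):
    """Repeatedly re-score candidate meta plans from execution feedback and
    return the best one (a toy stand-in for the paper's optimization)."""
    for _ in range(rounds):
        for plan in candidates:
            reward = execute_with_plan(plan, task)
            plan.score = 0.5 * plan.score + 0.5 * reward  # smooth the feedback
    return max(candidates, key=lambda p: p.score)


task = {"subgoals": ["find_knife", "slice_apple", "serve"]}
plans = [
    MetaPlan(["find_knife", "slice_apple"]),
    MetaPlan(["find_knife", "slice_apple", "serve"]),
]
best = optimize_meta_plans(plans, task)
print(best.steps)  # the plan covering all subgoals wins
```

Because the plan is plain text guidance rather than model weights, this kind of optimization is "plug-and-play" in the sense the abstract claims: the agent itself is never retrained.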


Meta is reportedly working on humanoid robots that help with chores

Engadget

If you look at your Roomba with disgust, thinking about what a far cry it is from the Jetsons' Rosey the Robot, help is on the way. Bloomberg reported on Friday that Meta plans to leverage its advances in AI and augmented reality to build a platform for futuristic humanoid robots that can help with household chores like folding laundry. Meta is reportedly creating a new team within its Reality Labs hardware division, which handles Quest VR headsets and the long-term Orion AR glasses project. Although it will build robot hardware during development, Meta's long-term goal is more like Android, where Google makes the software platform that almost all of the industry (outside of Apple) uses. Meta would make the underlying sensors, AI and software for other companies to put inside their hardware.


CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation

Liu, Jie, Zhou, Pan, Du, Yingjun, Tan, Ah-Hwee, Snoek, Cees G. M., Sonke, Jan-Jakob, Gavves, Efstratios

arXiv.org Artificial Intelligence

In this work, we address the cooperation problem among large language model (LLM) based embodied agents, where agents must cooperate to achieve a common goal. Previous methods often execute actions extemporaneously and incoherently, without long-term strategic and cooperative planning, leading to redundant steps, failures, and even serious repercussions in complex tasks like search-and-rescue missions where discussion and a cooperative plan are crucial. To solve this issue, we propose Cooperative Plan Optimization (CaPo) to enhance the cooperation efficiency of LLM-based embodied agents. Inspired by human cooperation schemes, CaPo improves cooperation efficiency with two phases: 1) meta-plan generation, and 2) progress-adaptive meta-plan adjustment and execution. In the first phase, all agents analyze the task, discuss, and cooperatively create a meta-plan that decomposes the task into subtasks with detailed steps, ensuring a long-term strategic and coherent plan for efficient coordination. In the second phase, agents execute tasks according to the meta-plan and dynamically adjust it based on their latest progress (e.g., discovering a target object) through multi-turn discussions. This progress-based adaptation eliminates redundant actions, improving the overall cooperation efficiency of agents. Experimental results on the ThreeDWorld Multi-Agent Transport and Communicative Watch-And-Help tasks demonstrate that CaPo achieves a much higher task completion rate and efficiency compared with state-of-the-art methods.
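The two phases above can be illustrated with a minimal sketch: phase 1 merges each agent's proposed subtasks into one shared meta-plan, and phase 2 prunes the plan as agents report progress. The function names and merging logic are hypothetical simplifications of what the paper does with multi-turn LLM discussion.

```python
# Illustrative sketch of CaPo's two phases: (1) agents pool subtask proposals
# into a shared meta-plan, (2) the plan is adjusted as progress is reported.
# Names and logic are toy stand-ins, not the paper's implementation.


def generate_meta_plan(proposals):
    """Phase 1: merge each agent's proposed subtasks into one ordered,
    de-duplicated meta-plan (a stand-in for multi-turn discussion)."""
    plan, seen = [], set()
    for agent_tasks in proposals:
        for t in agent_tasks:
            if t not in seen:
                seen.add(t)
                plan.append(t)
    return plan


def adapt_meta_plan(plan, progress):
    """Phase 2: drop subtasks that are already completed (e.g. a target
    object was found), so other agents avoid redundant actions."""
    return [t for t in plan if t not in progress]


proposals = [["find_box", "find_apple"], ["find_apple", "carry_to_bed"]]
plan = generate_meta_plan(proposals)
# One agent reports it has already found the apple:
plan = adapt_meta_plan(plan, {"find_apple"})
print(plan)  # ['find_box', 'carry_to_bed']
```

The key design point is that adaptation happens on the shared plan, not inside each agent, which is what keeps the agents' actions coherent over long horizons.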


Meta plans to more broadly label AI-generated content

Engadget

Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users acknowledge that they're uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, though it's likely to downrank content that's been identified as false or altered. The company announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take down the video from Facebook as it didn't violate the company's rules regarding manipulated media.


Meta plans to ramp up labeling of AI-generated images across its platforms

Engadget

Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It's part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we wrangle with the ramifications of generative AI (GAI) in a major election year in the US and other countries. According to Meta's president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. "Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads," Clegg wrote in a Meta Newsroom post. "We're building this capability now, and in the coming months we'll start applying labels in all languages supported by each app."


Meta AI Introduces A New AI Technology Called 'Few-Shot Learner (FSL)' To Tackle Harmful Content

#artificialintelligence

Training conventional AI models requires a massive number of labeled examples, typically tens of thousands to millions. Collecting and labeling that data can take several months, and this delay slows the deployment of AI systems that can detect new types of harmful content across social media platforms. To address this, Meta has deployed a relatively new AI model called "Few-Shot Learner" (FSL), which can detect harmful content even when only a small amount of labeled data is available.